
    Blocked-based Solidity — a Service for Graphically Creating the Smart Contracts in Solidity Programming Language

    In the last few years, we have observed a constantly increasing interest in systems and applications based on blockchain technology. Undoubtedly, this fact was significantly influenced by the introduction of the smart contract mechanism, which is currently one of the most popular features of blockchain and can be used across almost any industry. Smart contracts are programs stored on a blockchain that run when predetermined conditions are met. Since programming smart contracts is not trivial, this paper proposes a service that enables their creation by constructing diagrams from graphical blocks. The diagrams are then transformed into smart contract code written in the Solidity language. The paper presents the general idea of the proposed service and selected use cases illustrating its application.
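
    As an illustration of the diagram-to-code idea only, the following Python sketch assembles Solidity source text from a few hand-written block descriptions. The block format, field names, and the generate_contract function are assumptions made for this example and are not taken from the paper.

    def generate_contract(name, blocks):
        """Render a list of simple, hypothetical block descriptions as Solidity source."""
        lines = ["pragma solidity ^0.8.0;", "", f"contract {name} {{"]
        for block in blocks:
            if block["kind"] == "state":
                # a "state variable" block becomes a public field
                lines.append(f"    {block['type']} public {block['name']};")
            elif block["kind"] == "setter":
                # a "setter" block becomes a minimal public function
                lines.append(f"    function set_{block['target']}({block['type']} value) public {{")
                lines.append(f"        {block['target']} = value;")
                lines.append("    }")
        lines.append("}")
        return "\n".join(lines)

    blocks = [
        {"kind": "state", "type": "uint256", "name": "counter"},
        {"kind": "setter", "target": "counter", "type": "uint256"},
    ]
    print(generate_contract("Example", blocks))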

    A Survey on Intrusion Detection Systems for Fog and Cloud Computing

    The rapid advancement of internet technologies has dramatically increased the number of connected devices. This has created a huge attack surface that requires the deployment of effective and practical countermeasures to protect network infrastructures from the harm that cyber-attacks can cause. Hence, there is a pressing need to delineate the boundaries of personal information in cloud and fog computing globally and to adopt specific information security policies and regulations. The goal of a security policy and framework for cloud and fog computing is to protect end users and their information, reduce task-based operations, aid compliance, and set standards for expected user actions, all based on established rules for cloud computing. Moreover, intrusion detection systems are widely adopted solutions that monitor and analyze network traffic and detect anomalies, which helps identify ongoing adversarial activities, trigger alerts, and automatically block traffic from hostile sources. This survey paper analyzes the factors, including the technologies and techniques applied, that enable the successful deployment of security policy in fog and cloud computing. The paper focuses on Software-as-a-Service (SaaS) and intrusion detection, which provide an effective and resilient system structure for users and organizations. Our survey aims to provide a framework for a cloud and fog computing security policy, while addressing the security tools, policies, and services required for organizational adoption, particularly in cloud and fog environments. While developing the essential linkage between requirements, legal aspects, and the analysis techniques and systems used for intrusion detection, we recommend strategies for cloud and fog computing security policies. The paper develops structured guidelines for how organizations can adopt and audit the security of their systems, since security is an essential component of those systems, and presents a current state-of-the-art review of intrusion detection systems and their principles. Functionalities and techniques for developing these defense mechanisms are considered, along with concrete products used in operational systems. Finally, we discuss evaluation criteria and open challenges in this area.

    Replication of Recovery Log — An Approach to Enhance SOA Reliability

    Along with the development of SOA systems, their fault-tolerance requirements increase and become more stringent. To improve the reliability of SOA-based systems and applications, a ReServE service, providing external support for the recovery of web services, has been designed. In this paper we propose to enhance the resilience of ReServE by replicating the log with recovery information, and we address problems related to the deployment of this solution.

    Checkpointing and Rollback-Recovery Protocol for Mobile Systems with MW Session Guarantee

    In the mobile environment, weak-consistency replication of shared data is the key to obtaining high data availability, good access performance, and good scalability. Therefore, a new class of consistency models recommended for mobile environments, called session guarantees, has been introduced. Session guarantees, also called client-centric consistency models, have been proposed to define the required consistency properties of the system from the client's point of view. Unfortunately, none of the proposed consistency protocols providing session guarantees is resistant to server failures. Therefore, in this paper the checkpointing and rollback-recovery protocol rVsMW, which preserves the Monotonic Writes session guarantee, is presented. The recovery protocol is integrated with the underlying consistency protocol by combining the taking of checkpoints with the coherence operations of the VsSG protocol.
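
    To make the Monotonic Writes (MW) check and its coupling with checkpointing concrete, the Python sketch below shows a heavily simplified server that accepts a client write only when its version vector dominates the client's write-set vector, and takes a checkpoint together with the coherence operation. The Server class, its fields, and the in-memory list standing in for stable storage are illustrative assumptions, not the actual rVsMW or VsSG algorithms from the paper.

    import copy

    def dominates(vv_a, vv_b):
        """True when version vector vv_a >= vv_b componentwise."""
        return all(vv_a.get(server, 0) >= count for server, count in vv_b.items())

    class Server:
        def __init__(self, server_id, all_servers):
            self.id = server_id
            self.vv = {s: 0 for s in all_servers}  # writes known to this server
            self.data = {}
            self.checkpoints = []                  # stands in for stable storage

        def write(self, client_write_vv, key, value):
            # MW: the server must already know every write previously issued
            # in this client's session before it may apply a new one.
            if not dominates(self.vv, client_write_vv):
                raise RuntimeError("MW would be violated: synchronize replicas first")
            self.vv[self.id] += 1
            self.data[key] = value
            # Checkpoint taken together with the coherence operation, so the
            # guarantee still holds after rollback-recovery.
            self.checkpoints.append((copy.deepcopy(self.vv), dict(self.data)))
            return dict(self.vv)                   # client merges this into its write set

    s = Server("S1", ["S1", "S2"])
    client_writes = {}                             # the session's write-set vector
    client_writes = s.write(client_writes, "x", 1)
    client_writes = s.write(client_writes, "x", 2)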

    Towards Relaxed Rollback-Recovery Consistency in SOA

    Nowadays, one of the major paradigms of distributed processing is SOA. To improve the reliability of SOA-based systems, a ReServE service that ensures the recovery of a consistent processing state has been proposed. However, ReServE introduces a high overhead during failure-free computing. Thus, in this paper we propose relaxed recovery consistency models that allow the optimization of rollback-recovery in SOA. We propose their formal definitions and discuss the conditions under which these models are provided by ReServE.

    Version vector protocols implementing session guarantees

    This paper analyses different protocols for session guarantees. Session guarantees (also known as client-centric consistency models) are one class of consistency models for replicated shared data, alongside data-centric consistency models. The presentation comprises details of the data structures for the information maintained locally and passed between parties to check consistency conditions, as well as the algorithms used to process this information. The protocols are also discussed with respect to accuracy and data overhead.
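
    For orientation only, the minimal Python sketch below shows the kind of client-side bookkeeping that version vector protocols for session guarantees rely on: the session carries read-set and write-set vectors and compares them with a server's vector before an operation is performed. The ClientSession class and its method names are assumptions made for this illustration, not the data structures of any specific protocol analysed in the paper.

    def dominates(a, b):
        """True when version vector a >= b componentwise."""
        return all(a.get(k, 0) >= v for k, v in b.items())

    def merge(a, b):
        """Componentwise maximum of two version vectors."""
        return {k: max(a.get(k, 0), b.get(k, 0)) for k in set(a) | set(b)}

    class ClientSession:
        def __init__(self):
            self.reads = {}   # vector covering reads observed in this session
            self.writes = {}  # vector covering writes issued in this session

        def server_suitable_for_read(self, server_vv, ryw=True, mr=True):
            # Read Your Writes: the server must know the session's own writes.
            if ryw and not dominates(server_vv, self.writes):
                return False
            # Monotonic Reads: the server must know everything already read.
            if mr and not dominates(server_vv, self.reads):
                return False
            return True

        def record_read(self, server_vv):
            self.reads = merge(self.reads, server_vv)

        def record_write(self, server_vv_after_write):
            self.writes = merge(self.writes, server_vv_after_write)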